Composite Adversarial Attacks
Authors
Abstract
Adversarial attack is a technique for deceiving Machine Learning (ML) models, which provides a way to evaluate their adversarial robustness. In practice, attack algorithms are artificially selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms and their hyper-parameters from a candidate pool of 32 base attackers. We design a search space where the attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successors. The multi-objective NSGA-II genetic algorithm is adopted for finding the strongest attack policy with minimum complexity. The experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6× faster than AutoAttack), and achieves state-of-the-art performance on linf, l2 and unrestricted attacks.
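The attacking-sequence idea can be illustrated with a minimal sketch: each attacker in a policy starts from the adversarial example produced by its predecessor rather than from the clean input, while the overall perturbation stays within the budget around the clean input. The PGDAttack class and its interface below are hypothetical stand-ins for the paper's base attackers, not its actual implementation.

```python
import torch

class PGDAttack:
    """Toy l_inf PGD attacker with (epsilon, steps, step_size) hyper-parameters."""
    def __init__(self, epsilon, steps, step_size):
        self.epsilon, self.steps, self.step_size = epsilon, steps, step_size

    def run(self, model, x, y, x_orig):
        x_adv = x.clone().detach()
        for _ in range(self.steps):
            x_adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            x_adv = x_adv.detach() + self.step_size * grad.sign()
            # project back into the epsilon-ball around the *clean* input
            x_adv = x_orig + (x_adv - x_orig).clamp(-self.epsilon, self.epsilon)
            x_adv = x_adv.clamp(0, 1)
        return x_adv

def composite_attack(policy, model, x, y):
    """Run a sequence of attackers; each one is initialized with the
    previous attacker's output, as in CAA's attacking sequence."""
    x_adv = x
    for attacker in policy:
        x_adv = attacker.run(model, x_adv, y, x_orig=x)
    return x_adv

# Example policy: a coarse attack followed by a finer one.
policy = [PGDAttack(8/255, 10, 2/255), PGDAttack(8/255, 40, 0.5/255)]
```

In the paper, NSGA-II then searches over such sequences and their hyper-parameters, trading off attack strength against computational complexity; the policy above is only an illustrative instance of the search space.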
Similar Resources
Adversarial Attacks on Image Recognition
The purpose of this project is to extend the work done by Papernot et al. in [4] on adversarial attacks in image recognition. We investigated whether a reduction in feature dimensionality can maintain a comparable level of misclassification success while increasing computational efficiency. We formed an attack on a black-box model with an unknown training set by forcing the oracle to misclassif...
Boosting Adversarial Attacks with Momentum
Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of the existing adversarial attacks can only fool a black-box model with a low success rate because...
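The momentum idea named in the title accumulates a velocity over gradient steps (as in the MI-FGSM update) so that the attack escapes poor local maxima and transfers better to black-box models. The sketch below uses the standard momentum-iterative update with illustrative hyper-parameters; it is an assumption about the method, not this paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def momentum_attack(model, x, y, epsilon=8/255, steps=10, mu=1.0):
    """Momentum iterative attack sketch: accumulate an l1-normalized
    gradient into a velocity g, then step in sign(g).
    Assumes NCHW image batches in [0, 1]."""
    alpha = epsilon / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # momentum: normalize the gradient, then add it to the velocity
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```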
Adversarial Attacks and Defences Competition
To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them. In this chapter, we describe the...
A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks where input examples are intentionally perturbed to fool DNNs. In this work, we revisit the DNN training process that includes adversarial examples into the training dataset so as to improve DNN’s resilience to adversarial attacks, namely, adversarial training. Our experiments show that d...
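The core recipe described here, training on adversarial examples crafted on the fly, can be sketched in a few lines. The multi-strength aspect is approximated below by sampling the perturbation budget from a set of strengths per batch; the strength set, the FGSM crafting step, and the joint clean-plus-adversarial loss are illustrative assumptions, not the paper's exact scheme.

```python
import random
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """One-step FGSM perturbation used to craft training examples."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y,
                              strengths=(2/255, 4/255, 8/255)):
    """Multi-strength adversarial training sketch: each batch is
    augmented with adversarial examples crafted at a randomly
    chosen perturbation strength."""
    eps = random.choice(strengths)
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()
    # train on clean and adversarial examples jointly
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```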
Divide, Denoise, and Defend against Adversarial Attacks
Deep neural networks, although shown to be a successful class of machine learning algorithms, are known to be extremely unstable to adversarial perturbations. Improving the robustness of neural networks against these attacks is important, especially for security-critical applications. To defend against such attacks, we propose dividing the input image into multiple patches, denoising each patch...
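The divide-and-denoise step can be sketched directly from the description: split the input into patches, denoise each patch independently, and stitch the result back together before classification. The patch size and the denoiser below are stand-ins; the paper's actual denoiser and patching scheme may differ.

```python
import torch
import torch.nn.functional as F

def patchwise_denoise(x, denoiser, patch=8):
    """Split an NCHW image batch into non-overlapping patches,
    denoise each patch independently, and reassemble the image.
    `denoiser` is any callable mapping a patch to a cleaned patch."""
    n, c, h, w = x.shape
    out = x.clone()
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[:, :, i:i+patch, j:j+patch] = denoiser(
                x[:, :, i:i+patch, j:j+patch])
    return out

# Example: a trivial smoothing "denoiser" (average pooling stand-in).
denoiser = lambda p: F.avg_pool2d(p, kernel_size=3, stride=1, padding=1)
```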
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2021
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v35i10.17075